Search Results: "pm"

15 January 2024

Colin Watson: OpenUK New Year's Honours

Apparently I got an honour from OpenUK. There are a bunch of people I know on that list. Chris Lamb and Mark Brown are familiar names from Debian. Colin King and Jonathan Riddell are people I know from past work in Ubuntu. I've admired David MacIver's work on Hypothesis and Richard Hughes' work on firmware updates from afar. And there are a bunch of other excellent projects represented there: OpenStreetMap, Textualize, and my alma mater of Cambridge to name but a few. My friend Stuart Langridge wrote about being on a similar list a few years ago, and I can't do much better than to echo it: in particular he wrote about the way the open source development community is often at best unwelcoming to people who don't look like Stuart and I do. I can't tell a whole lot about demographic distribution just by looking at a list of names, but while these honours still seem to be skewed somewhat male, I'm fairly sure they're doing a lot better in terms of gender balance than my home project of Debian is, for one. I hope this is a sign of improvement for the future, and I'll do what I can to pay it forward.

11 January 2024

Matthias Klumpp: Wayland really breaks things... Just for now?

This post is in part a response to an aspect of Nate's post "Does Wayland really break everything?", but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months [1].

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland [2]; this is a fact at this point (and has been for a while), and sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with them at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great; it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in future, if any are found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense! The shift towards people seeing Linux more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland, is also an amazing development and I love to see this; this holistic approach is something I always wanted! Furthermore, I think Wayland removes a lot of functionality that shouldn't exist in a modern compositor, and that's a good thing too! Some of X11's features and design decisions had clear drawbacks that we shouldn't replicate. I highly recommend reading Nate's blog post; it's very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

But! Of course there was a but coming... I think that while developing Wayland-as-an-ecosystem we are now entrenched in narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, and people from different desktops with different design philosophies debate the merits of those over and over again, never reaching any conclusion (just as you will never get an answer out of humans on whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop's design philosophies, others prefer the smallest and leanest protocol possible, and other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches. But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but it is just work that has to be done. It becomes frustrating, though, if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate's Photoshop analogy: Of course Linux does not break Photoshop; it is Adobe's responsibility to port it. But what if Linux was missing a crucial syscall that Photoshop needed for proper functionality, and Adobe couldn't port it without that? In that case it becomes much less clear who is to blame for Photoshop not being available. A lot of Wayland protocol work is focused on the environment and design, while applications and the work to port them are often considered less. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blog post). In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that runs on Linux only and uses the abilities of the platform to its fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. The vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. The same goes for open source projects, especially if you offer to do Linux porting work for them... But overall, the ease of use and availability of required applications and their usability rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.
KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25years of KDE) [3]

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: they all do not matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of "easiest": In many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived as a bit easier to onboard Windows users with (among other benefits). Ultimately, it comes down to applications and third-party support though. Here's a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with, and will calculate the merits of carefully in advance, is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all. So if they learn that porting to Linux not only means added testing and support, but also means having to choose between the legacy X11 display server that allows for 1:1 porting from Windows or the new Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen. Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things, or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt's documentation you will often find texts like "works everywhere except on Wayland compositors or mobile" [4]. Many missing bits or altered behavior are just papercuts, but those add up. And if users have a worse experience, this will translate to more support work, or to people not wanting to use the software on the respective platform.

What's missing?

Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly in the way they want and expects the layout to be stored and to be loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps. It is important to note that this is not a legacy design, but in many cases an intentional choice: these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop). Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way. I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol, which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and have now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, six years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous place again and again. Meanwhile, toolkits can not adopt these protocols, applications can not use them, and they can not be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all, because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it's not the end of the world if every window has the same icon, but it's one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows like LibrePCB are affected, so much so that they would rather run their app through Xwayland for now. I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows, all with the default yellow "W" icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is also not merged yet. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, as is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we're not fully there yet.

Wayland is frustrating for (some) application authors

As you can see, there are valid applications and valid use cases that can not be ported to Wayland yet with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author's perspective, Wayland does break things quite significantly, because things that worked before can no longer work, and Wayland (the whole stack) does not provide any avenue to achieve the same result. Wayland does break screen sharing, global hotkeys, gaming latency (via "no tearing"), etc.; however, for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except "use Xwayland and be on emulation as a second-class citizen forever", it just results in very frustrated application developers. For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!

Wayland, via its protocol limitations, forces a certain way to build application UX, often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out: Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes, in the case of the multiwindow UI paradigm, impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I have spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren't the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details, even though they may be programmers as well; the real goal is not to fiddle with the display server, but to get to a scientific result somehow. Another issue is portability layers like Wine, which need to run Windows applications as-is on Wayland. Apparently Wine's Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question: Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications as possible ported over to Wayland, even though we might sacrifice some protocol purity? I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors. If the answer for your environment turns out to be "Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability", then your desktop/compositor should just immediately NACK protocols that add something like this, and you simply shouldn't engage in the discussion, as you reject the very premise of the new protocol: that it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment. If the answer turns out to be "We do want some portability", the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren't. We can't blindly copy all X11 behavior; some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise: allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository more easily, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support fewer things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits). You may now say that a lot of apps are ported already, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them. To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far [5] were the missing window-positioning and window-position-restore features.
I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and is easier to ignore. What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in future. And the only way to get something good done is through contribution and friendly discussion.

Footnotes
  1. Apologies for the clickbait-y title; it comes with the subject.
  2. When I talk about Wayland I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first
  4. Qt has awesome platform issues pages, like for macOS and Linux/X11, which help with porting efforts, but Qt doesn't even list Linux/Wayland as a supported platform. There is some information though, like window geometry peculiarities, which isn't particularly helpful when porting (but is still essential to know).
  5. Besides issues with Nvidia hardware: CUDA for simulations and machine learning is pretty much everywhere, so Nvidia cards are common, which still causes trouble on Wayland. It is improving though.

10 January 2024

Colin Watson: Going freelance

I've mentioned this in a couple of other places, but I realized I never got round to posting about it on my own blog rather than on other people's services. How remiss of me. Anyway: after much soul-searching, I decided a few months ago that it was time for me to move on from Canonical and the Launchpad team there. Nearly 20 years is a long time to spend at any company, and although there are a bunch of people I'll miss, Launchpad is in a reasonable state where I can let other people have a turn. I'm now in business for myself as a freelance developer! My new company is Columbiform, and I'm focusing on Debian packaging and custom Python development. My services page has some self-promotion on the sorts of things I can do. My first gig, and the one that made it viable to make this jump, is at Freexian, where I'm helping with an exciting infrastructure project that we hope will start making Debian developers' lives easier in the near future. This is likely to take up most of my time at least through to the end of 2024, but I may have some spare cycles. Drop me a line if you have something where you think I could be a good fit, and we can have a talk about it.

7 January 2024

Jonathan McDowell: Free Software Activities for 2023

This year was hard from a personal and work point of view, which impacted the amount of Free Software bits I ended up doing - even when I had the time I often wasn't in the right head space to make progress on things. However, writing up this annual recap has been a useful exercise, as I achieved more than I realised. For previous years see 2019, 2020, 2021 + 2022.

Conferences

The only Free Software related conference I made it to this year was DebConf23 in Kochi, India. Changes with projects at work meant I couldn't justify anything work related. This year I'm planning to make it to FOSDEM, and haven't made a decision on DebConf24 yet.

Debian

Most of my contributions to Free Software continue to happen within Debian. I started the year working on retrogaming with Kodi on Debian. I got this to a much better state for bookworm, with it being possible to run the bsnes-mercury emulator under Kodi using RetroArch. There are a few other libretro backends available for RetroArch, but Kodi needs some extra controller mappings packaged up first. Plenty of uploads were involved, though some of this was aligning all the dependencies and generally cleaning things up in iterations. I continued to work on a few packages within the Debian Electronics Packaging Team. OpenOCD produced a new release in time for the bookworm release, so I uploaded 0.12.0-1. There were a few minor sigrok cleanups - sigrok 0.3, libsigrokdecode 0.5.3-4 + libsigrok 0.5.2-4 / 0.5.2-5. While I didn't manage to get the work completed, I did some renaming of the ESP8266 related packages - gcc-xtensa-lx106 (which saw a 13 upload pre-bookworm) has become gcc-xtensa (with 14) and binutils-xtensa-lx106 has become binutils-xtensa (with 6). Binary packages remain the same, but this is intended to allow for the generation of ESP32 compiler toolchains from the same source. onak saw 0.6.3-1 uploaded to match the upstream release. I also uploaded libgpg-error 1.47-1 (though I can claim no credit for any of the work in preparing the package) to help move things forward on updating gnupg2 in Debian. I NMUed tpm2-pkcs11 1.9.0-0.1 to fix some minor issues pre-bookworm release; I use this package myself to store my SSH key within my laptop's TPM, so I care about it being in a decent state. sg3-utils also saw a bit of love with 1.46-2 + 1.46-3 - I don't work in the storage space these days, but I'm still listed as an uploader and there was an RC bug around the library package naming that I was qualified to fix and test pre-bookworm. Related to my retroarch work I sponsored uploads of mgba for Ryan Tandy: 0.10.0+dfsg-1, 0.10.0+dfsg-2, 0.10.1+dfsg-1, 0.10.2+dfsg-1, mgba 0.10.1+dfsg-1+deb12u1. As part of the Data Protection Team I responded to various inbound queries to that team, both from project members and those external to the project. I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions as well as our panel at DebConf23. Finally, the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.26, 2023.06.29, 2023.09.10, 2023.09.24 + 2023.12.24.

Linux

I had a few minor patches accepted to the kernel this year. A pair of safexcel cleanups (improved error logging for firmware load fail and cleanup on load failure) came out of upgrading the kernel running on my RB5009. The rest were related to my work on repurposing my C.H.I.P. The AXP209 driver needed to be extended to support GPIO3 (with an associated DT schema update). That allowed Bluetooth to be enabled. Adding the AXP209 internal temperature ADC as an iio-hwmon node means it can be tracked using the normal sensor monitoring framework. And finally I added the pinmux settings for mmc2, which I use to support an external microSD slot on my C.H.I.P.

Personal projects

2023 saw another minor release of onak, 0.6.3, which resulted in a corresponding Debian upload (0.6.3-1). It has a couple of bug fixes (including a particularly annoying, if minor, one around systemd socket activation that felt very satisfying to get to the bottom of), but I still lack the time to do any of the major changes I would like to. I wrote listadmin3 to allow easy manipulation of moderation queues for Mailman3. It's basic, but it's drastically improved my timeliness on dealing with held messages.

3 January 2024

Jacob Adams: Fixing My Shell

For an embarrassingly long time, my shell has unnecessarily tried to initialize a console font in every kind of interactive terminal. This leaves the following error message in my terminal:
Couldn't get a file descriptor referring to the console.
It even shows up twice when running tmux! Clearly I've done something horrible to my configuration, and now I've got to clean it up.

How does Shell Initialization Work?

The precise set of files a shell reads at start-up is somewhat complex, and is defined by this excellent chart [1]:

Shell Startup Flowchart

For the purposes of what I'm trying to fix, there are two paths that matter.
  • Interactive login shell startup
  • Interactive non-login shell startup
As you can see from the above, trying to distinguish these two paths in bash is an absolute mess. zsh, in contrast, is much cleaner and allows for a clear distinction between these two cases, with the login shell configuration files being a superset of the configuration files used by non-login shells.
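As a rough sketch of the order zsh reads its startup files for these two cases (ignoring the always-sourced zshenv files and the logout scripts):
# Interactive login shell:
#   /etc/zprofile, ~/.zprofile, /etc/zshrc, ~/.zshrc, /etc/zlogin, ~/.zlogin
# Interactive non-login shell:
#   /etc/zshrc, ~/.zshrc
# So configuration needed by every interactive shell belongs in ~/.zshrc,
# and login-only setup belongs in ~/.zprofile.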

How did we get here?

I keep my configuration files in a config repository. Some time ago I got quite frustrated at this whole shell initialization thing, and just linked everything together in one profile file:
.mkshrc -> config/profile
.profile -> config/profile
This appears to have created this mess.

Move to ZSH

I've wanted to move to zsh for a while, and took this opportunity to do so. So my new configuration files are .zprofile and .zshrc instead of .mkshrc and .profile (though I'm going to retain those symlinks to allow my old configurations to continue working). mksh is a nice simple shell, but using zsh here allows for more consistency between my home and $WORK environments, and will allow a lot more powerful extensions.

Updating my Prompt

ZSH prompts use a totally different configuration scheme based on variable expansion. However, zsh also uses the PROMPT variable, so I set that to the needed values for zsh. There's an excellent ZSH prompt generator at https://zsh-prompt-generator.site that I used to get these variables, though I'm sure they're in the zsh documentation somewhere as well. I wanted a simple prompt with user (%n), host (%m), and path (%d). I also wanted a % at the end to distinguish this from other shells.
PROMPT="%n@%m%d%% "

Fixing mksh prompts

This worked, but surprisingly mksh also looks at PROMPT, leaving my mksh prompt as the literal prompt string without expansion. Fixing this requires setting up a proper shrc and linking it to .mkshrc and .zshrc. I chose to move my existing aliases script to this file, as it also broke in non-login shells when moved to profile. Within this new shrc file we can check what shell we're running via $0:
if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]
I chose to add plain zsh here in case I run it manually for whatever reason. I also added -zsh to support tmux, as that's what it presents as $0. This also means you'll need to be careful to quote $0 or you'll get fun shell errors. There's probably a better way to do this, but I couldn't find something that was compatible with POSIX shell, which is what most of this has to be written in to be compatible with mksh and zsh [2]. We can then set up different prompts for each:
if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]
then
	PROMPT="%n@%m%d%% "
else
	# Borrowed from
	# http://www.unixmantra.com/2013/05/setting-custom-prompt-in-ksh.html
	PS1='$(id -un)@$(hostname -s)$PWD$ '
fi
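One alternative worth noting (my suggestion, not something from the original post): only zsh sets ZSH_VERSION, while mksh sets KSH_VERSION instead, so a POSIX-compatible check that avoids inspecting $0 could look like:
if [ -n "${ZSH_VERSION:-}" ]
then
	PROMPT="%n@%m%d%% "
else
	PS1='$(id -un)@$(hostname -s)$PWD$ '
fi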

Setting Console Font in a Better Way

I've been setting the console font via setfont in my .profile for a while. I'm not sure where I picked this up, but it's not the right way. I even tried to only run this in a console with -t, but that only checks that output is a terminal, not specifically a console.
if [ -t 1 ]
then
	setfont /usr/share/consolefonts/Lat15-Terminus20x10.psf.gz
fi
This also only runs once the console is logged into, instead of initializing it on boot. The correct way to set this up, on Debian-based systems, is reconfiguring console-setup like so:
dpkg-reconfigure console-setup
From there you get a menu of encoding, character set, font, and then font size to configure for your consoles.

VIM mode

To enable VIM mode for ZSH, you simply need to set:
bindkey -v
This allows you to edit your shell commands with basic VIM keybinds.

Getting back Ctrl + Left Arrow and Ctrl + Right Arrow

Moving around one word at a time with Ctrl and the arrow keys is broken by vim mode unfortunately, so we'll need to re-enable it:
bindkey "^[[1;5C" forward-word
bindkey "^[[1;5D" backward-word

Why did it show up twice for tmux?

Because tmux creates a login shell. Adding:
echo PROFILE
to profile and:
echo SHRC
to shrc confirms this with:
PROFILE
SHRC
SHRC
For now, profile sources shrc, so that running twice is expected. But after this exploration and diagram, it's clear we don't need that for zsh. Removing this will break remote bash shells (see above diagram), but I can live without those on my development laptop. Removing that line results in the expected output for a new terminal:
SHRC
And the full output for a new tmux session or console:
PROFILE
SHRC
So finally we're back to a normal state! This post is a bit unfocused, but I hope it helps someone else repair or enhance their shell environment. If you liked this [4], or know of any other ways to manage this I could use, let me know at fixmyshell@tookmund.com.
  1. This chart comes from the excellent Shell Startup Scripts article by Peter Ward. I've generated the SVG from the graphviz source linked in the article.
  2. Technically it has to be compatible with Korn shell, but a quick google seems to suggest that that's actually a subset of POSIX shell.
  3. I use oh-my-zsh at $WORK, but for now I'm going to simplify my personal configuration. If I end up using a lot of plugins I'll reconsider this.
  4. Or if you've found any typos or other issues that I should fix.

1 January 2024

Tim Retout: Prevent DOM-XSS with Trusted Types - a smarter DevSecOps approach

It can be incredibly easy for a frontend developer to accidentally write a client-side cross-site-scripting (DOM-XSS) security issue, and yet these are hard for security teams to detect. Vulnerability scanners are slow, and suffer from false positives. Can smarter collaboration between development, operations and security teams provide a way to eliminate these problems altogether? Google claims that Trusted Types has all but eliminated DOM-XSS exploits on those of their sites which have implemented it. Let's find out how this can work!

DOM-XSS vulnerabilities are easy to write, but hard for security teams to catch

It is very easy to accidentally introduce a client-side XSS problem. As an example of what not to do, suppose you are setting an element's text to the current URL, on the client side:
// Don't do this
para.innerHTML = location.href;
Unfortunately, an attacker can now manipulate the URL (and e.g. send this link in a phishing email), and any HTML tags they add will be interpreted by the user's browser. This could potentially be used by the attacker to send private data to a different server. Detecting DOM-XSS using vulnerability scanning tools is challenging - typically this requires crawling each page of the website and attempting to detect problems such as the one above, but there is a significant risk of false positives, especially as the complexity of the logic increases. There are already ways to avoid these exploits: developers should validate untrusted input before making use of it. There are libraries such as DOMPurify which can help with sanitization. [1] However, if you are part of a security team with responsibility for preventing these issues, it can be complex to understand whether you are at risk. Different developer teams may be using different techniques and tools. It may be impossible for you to work closely with every developer, so how can you know that the frontend team have used these libraries correctly?
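Coming back to the example above: when the URL only needs to be displayed as text, a safer assignment (my illustration, not from the original article) sidesteps HTML parsing entirely:
// Treated as plain text, never parsed as HTML
para.textContent = location.href;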

Trusted Types closes the DevSecOps feedback loop for DOM-XSS, by allowing Ops and Security to verify good Developer practices

Trusted Types enforces sanitization in the browser [2], by requiring the web developer to assign a particular kind of JavaScript object rather than a native string to .innerHTML and other dangerous properties. Provided these special types are created in an appropriate way, they can be trusted not to expose XSS problems. This approach will work with whichever tools the frontend developers have chosen to use, and detection of issues can be rolled out by infrastructure engineers without requiring frontend code changes.
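As a minimal sketch of what this looks like in application code (the policy name and the use of DOMPurify are illustrative assumptions, not requirements of the API):
// Create a policy whose createHTML hook sanitizes its input
if (window.trustedTypes && trustedTypes.createPolicy) {
  const htmlPolicy = trustedTypes.createPolicy('app-html', {
    createHTML: (input) => DOMPurify.sanitize(input),
  });
  // Assign a TrustedHTML object; a raw string here would be rejected
  // once require-trusted-types-for is enforced
  para.innerHTML = htmlPolicy.createHTML(location.href);
}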

Content Security Policy allows enforcement of security policies in the browser itself

Because enforcing this safer approach in the browser for all websites would break backwards-compatibility, each website must opt in through Content Security Policy headers. Content Security Policy (CSP) is a mechanism that allows web pages to restrict what actions a browser should execute on their page, and a way for the site to receive reports if the policy is violated.

Figure 1: Content-Security-Policy browser communication (a diagram showing the browser fetching "/" from a web server, receiving Content-Security-Policy headers in the response, and reporting any security violations to "/csp")

This is revolutionary, because it allows servers to receive feedback in real time on errors that may be appearing in the browser's console.
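For example, a site could opt in to Trusted Types in report-only mode with a header along these lines (the /csp reporting endpoint simply mirrors the diagram above; the exact directive list will vary per site):
Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; report-uri /csp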

Trusted Types can be rolled out incrementally, with continuous feedback

Web.dev's article on Trusted Types explains how to safely roll out the feature using the features of CSP itself:
  • Deploy a CSP collector if you haven t already
  • Switch on CSP reports without enforcement (via Content-Security-Policy-Report-Only headers)
  • Iteratively review and fix the violations
  • Switch to enforcing mode when there is a low enough rate of reports
Static analysis in a continuous integration pipeline is also sensible: you want to prevent regressions shipping in new releases before they trigger a flood of CSP reports. This will also give you a chance of finding any low-traffic vulnerable pages.

Smart security teams will use techniques like Trusted Types to eliminate entire classes of bugs at a time

Rather than playing whack-a-mole with unreliable vulnerability scanning or bug bounties, techniques such as Trusted Types are truly in the spirit of Secure by Design: build high quality in from the start of the engineering process, and do this in a way which closes the DevSecOps feedback loop between your Developer, Operations and Security teams.

  1. Sanitization libraries are especially needed when the examples become more complex, e.g. if the application must manipulate the input. DOMPurify version 1.0.9 also added Trusted Types support, so it can still be used to help developers adopt this feature.
  2. Trusted Types has existed in Chrome and Edge since 2020, and should soon be coming to Firefox as well. However, it's not necessary to wait for Firefox or Safari to add support, because the large market share of Chrome and Edge will let you identify and fix your site's DOM-XSS issues, even if you do not set enforcing mode, and users of all browsers will benefit. Even so, it is great that Mozilla is now on board.

27 December 2023

Bits from Debian: Statement about the EU Cyber Resilience Act

Debian Public Statement about the EU Cyber Resilience Act and the Product Liability Directive

The European Union is currently preparing a regulation "on horizontal cybersecurity requirements for products with digital elements" known as the Cyber Resilience Act (CRA). It is currently in the final "trilogue" phase of the legislative process. The act includes a set of essential cybersecurity and vulnerability handling requirements for manufacturers. It will require products to be accompanied by information and instructions to the user. Manufacturers will need to perform risk assessments and produce technical documentation and, for critical components, have third-party audits conducted. Discovered security issues will have to be reported to European authorities within 24 hours (1). The CRA will be followed up by the Product Liability Directive (PLD), which will introduce compulsory liability for software. While a lot of these regulations seem reasonable, the Debian project believes that there are grave problems for Free Software projects attached to them. Therefore, the Debian project issues the following statement:
  1. Free Software has always been a gift, freely given to society, to take and to use as seen fit, for whatever purpose. Free Software has proven to be an asset in our digital age and the proposed EU Cyber Resilience Act is going to be detrimental to it.
     a. As the Debian Social Contract states, our goal is "make the best system we can, so that free works will be widely distributed and used." Imposing requirements such as those proposed in the act makes it legally perilous for others to redistribute our work and endangers our commitment to "provide an integrated system of high-quality materials with no legal restrictions that would prevent such uses of the system". (2)
     b. Knowing whether software is commercial or not isn't feasible, neither in Debian nor in most free software projects - we don't track people's employment status or history, nor do we check who finances upstream projects (the original projects that we integrate in our operating system).
     c. If upstream projects stop making available their code for fear of being in the scope of CRA and its financial consequences, system security will actually get worse rather than better.
     d. Having to get legal advice before giving a gift to society will discourage many developers, especially those without a company or other organisation supporting them.
  2. Debian is well known for its security track record through practices of responsible disclosure and coordination with upstream developers and other Free Software projects. We aim to live up to the commitment made in the Debian Social Contract: "We will not hide problems." (3)
     a. The Free Software community has developed a fine-tuned, tried-and-tested system of responsible disclosure in case of security issues which will be overturned by the mandatory reporting to European authorities within 24 hours (Art. 11 CRA).
     b. Debian spends a lot of volunteering time on security issues, provides quick security updates and works closely together with upstream projects and in coordination with other vendors. To protect its users, Debian regularly participates in limited embargos to coordinate fixes to security issues so that all other major Linux distributions can also have a complete fix when the vulnerability is disclosed.
     c. Security issue tracking and remediation is intentionally decentralized and distributed. The reporting of security issues to ENISA and the intended propagation to other authorities and national administrations would collect all software vulnerabilities in one place. This greatly increases the risk of leaking information about vulnerabilities to threat actors, representing a threat for all the users around the world, including European citizens.
     d. Activists use Debian (e.g. through derivatives such as Tails), among other reasons, to protect themselves from authoritarian governments; handing threat actors exploits they can use for oppression is against what Debian stands for.
     e. Developers and companies will downplay security issues because a "security" issue now comes with legal implications. Less clarity on what is truly a security issue will hurt users by leaving them vulnerable.
  3. While proprietary software is developed behind closed doors, Free Software development is done in the open, transparent for everyone. To retain parity with proprietary software the open development process needs to be entirely exempt from CRA requirements, just as the development of software in private is. A "making available on the market" can only be considered after development is finished and the software is released.
  4. Even if only "commercial activities" are in the scope of CRA, the Free Software community - and as a consequence, everybody - will lose a lot of small projects. CRA will force many small enterprises and most probably all self-employed developers out of business because they simply cannot fulfill the requirements imposed by CRA. Debian and other Linux distributions depend on their work. If accepted as it is, CRA will undermine not only an established community but also a thriving market. CRA needs an exemption for small businesses and, at the very least, solo-entrepreneurs.

Information about the voting process: Debian uses the Condorcet method for voting. Simplistically, plain Condorcet's method can be stated like so: "Consider all possible two-way races between candidates. The Condorcet winner, if there is one, is the one candidate who can beat each other candidate in a two-way race with that candidate." The problem is that in complex elections, there may well be a circular relationship in which A beats B, B beats C, and C beats A. Most of the variations on Condorcet use various means of resolving the tie. Debian's variation is spelled out in the constitution, specifically A.5(3). Sources: (1) CRA proposals and links & PLD proposals and links (2) Debian Social Contract No. 2, 3, and 4 (3) Debian Constitution

26 December 2023

Russ Allbery: krb5-strength 3.3

krb5-strength is a toolkit of plugins and support programs for password strength checking for Kerberos KDCs, either Heimdal or MIT. It also includes a password history mechanism for Heimdal KDCs. This is a maintenance release, since there hadn't been a new release since 2020. It contains the normal sort of build system and portability updates, and the routine bump in the number of hash iterations used for the history mechanism to protect against (some) brute force attacks. It also includes an RPM spec file contributed by Daria Phoebe Brashear, and some changes to the Perl dependencies to track current community recommendations. You can get the latest version from the krb5-strength distribution page.

25 December 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Automatic Versioning Using semantic-release

This post describes how I'm using semantic-release on gitlab-ci to manage versioning automatically for different kinds of projects, following a simple workflow (a develop branch where changes are added or merged to test new versions, a temporary release/#.#.# branch to generate the release candidate versions, and a main branch where the final versions are published).

What is semantic-release?

It is a Node.js application designed to manage project versioning information on Git repositories using a continuous integration system (in this post we will use gitlab-ci).

How does it work?

By default semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish. If, after analyzing the commits, the version number has to be changed, the command updates the files we tell it to (i.e. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository. When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually or we could run extra jobs when building the final assets; if they fail it is not a big issue anyway, numbers are cheap and infinite, so we can skip releases if needed).

Commit messages and versioning

The commit messages must follow a known format; the default module used to analyze them uses the angular git commit guidelines, but I prefer the conventional commits one, mainly because it's a lot easier to use when you want to update the MAJOR version. The commit message format used must be:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
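For illustration, a few made-up commit messages that follow this convention (scopes and descriptions invented for this example):
fix(parser): handle empty input files
feat(cli): add a --json output flag
feat!: drop support for old config format
The first would bump the PATCH number, the second the MINOR number, and the third (because of the !, or equivalently a BREAKING CHANGE: footer in the body) the MAJOR number, following the rules described below.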
The system supports three types of branches: release, maintenance and pre-release, but for now I'm not using maintenance ones. The branches I use and their types are:
  • main as release branch (final versions are published from there)
  • develop as pre release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
  • release/#.#.# as pre release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged with main they are deleted)
On the release branch (main) the version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last version change (it looks for tags on the same branch).
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change.
On the pre release branches (develop and release/#.#.#) the version and pre release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to get the next version right). The version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last released version. In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version. In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change. In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.
  4. The pre release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2, and so on).

How do we manage its configuration?

Although the system is designed to work with nodejs projects, it can be used with multiple programming languages and project types. For nodejs projects the usual place to put the configuration is the project's package.json, but I prefer to use the .releaserc file instead. As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

Container used to run it

As we run the command on a gitlab-ci job we use the image built from the following Dockerfile:
Dockerfile
# Semantic release image
FROM golang:alpine AS tmpl-builder
#RUN go install github.com/krakozaure/tmpl@v0.4.0
RUN go install github.com/sto/tmpl@v0.4.0-sto.2
FROM node:lts-alpine
COPY --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl
RUN apk update &&\
  apk upgrade &&\
  apk add curl git jq openssh-keygen yq zip &&\
  npm install --location=global\
    conventional-changelog-conventionalcommits@6.1.0\
    @qiwi/multi-semantic-release@7.0.0\
    semantic-release@21.0.7\
    @semantic-release/changelog@6.0.3\
    semantic-release-export-data@1.0.1\
    @semantic-release/git@10.0.1\
    @semantic-release/gitlab@9.5.1\
    @semantic-release/release-notes-generator@11.0.4\
    semantic-release-replace-plugin@1.2.7\
    semver@7.5.4\
  &&\
  rm -rf /var/cache/apk/*
CMD ["/bin/sh"]

How and when is it executed?

The job that runs semantic-release is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don't want to create a new version that does not compile or does not pass at least the unit tests). The job is something like the following:
semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+.\d+.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release
Where the SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above, and sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:
  • The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
  • The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with a SSH key.

The sr_gen_releaserc_json function

The code for the sr_gen_releaserc_json function is the following:
sr_gen_releaserc_json()
{
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "gitlab_url: ${CI_SERVER_URL}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" |
      sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}
Almost all the variables used in the function are defined by gitlab except ASSETS_DIR and PROJECT_TYPE; in the complete pipelines the ASSETS_DIR is defined on a common file included by all the pipelines and the project type is defined on the .gitlab-ci.yml file of each project. If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:
{
  "plugins": [
    {{- if .sr_local_plugin }}
    "{{ .sr_local_plugin }}",
    {{- end }}
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
    {{- if .replacements }}
    [
      "semantic-release-replace-plugin",
      { "replacements": {{ .replacements | toJson }} }
    ],
    {{- end }}
    "@semantic-release/release-notes-generator",
    {{- if eq .branch "main" }}
    [
      "@semantic-release/changelog",
      { "changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog" }
    ],
    {{- end }}
    [
      "@semantic-release/git",
      {
        "assets": {{ if .assets }}{{ .assets | toJson }}{{ else }}[]{{ end }},
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "{{ .gitlab_url }}", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
    {{- if .rc_branch }}
    { "name": "{{ .rc_branch }}", "prerelease": "rc" },
    {{- end }}
    "main"
  ]
}
The values used to process the template are defined in a file built on the fly (releaserc_values.yaml) that includes the following keys and values (an illustrative example of the generated file follows the list):
  • branch: the name of the current branch
  • gitlab_url: the URL of the GitLab server (the value is taken from the CI_SERVER_URL variable)
  • rc_branch: the name of the current rc branch; we only set this value when we are processing one, because semantic-release only allows a single branch to match the rc prefix: if we used a wildcard (i.e. release/*) and users kept more than one release/#.#.# branch open at the same time, the calls to semantic-release would fail.
  • sr_local_plugin: the path to the local plugin we use (shown later)
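As an illustration, the generated file on a hypothetical release/1.2.3 branch could look like the following (all values are illustrative; in particular the sr_local_plugin path depends on the runner's checkout directory):
# Hypothetical contents of /tmp/releaserc_values.yaml on a release/1.2.3 branch
branch: release/1.2.3
gitlab_url: https://gitlab.com
rc_branch: release/1.2.3
sr_local_plugin: /builds/my-group/my-project/assets/local-plugin.cjs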
The template also uses a values_${project_type}_project.yaml file that includes settings specific to the project type; the one for nodejs is as follows:
replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"${nextRelease.version}\""
assets:
  - "CHANGELOG.md"
  - "package.json"
The replacements section is used to update the version field in the relevant files of the project (in our case the package.json file) and the assets section lists the files that will be committed to the repository when the release is published. Looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because updating the file on other branches creates a merge nightmare and we are only interested in it for released versions anyway. The local plugin adds code to rename the package.json_disabled file back to package.json if present and prints the last and next versions in the logs with a format that can be easily parsed using sed (a parsing sketch follows the plugin code):
local-plugin.cjs
// Minimal plugin to:
// - rename the package.json_disabled file to package.json if present
// - log the semantic-release last & next versions
function verifyConditions(pluginConfig, context) {
  var fs = require('fs');
  if (fs.existsSync('package.json_disabled')) {
    fs.renameSync('package.json_disabled', 'package.json');
    context.logger.log(`verifyConditions: renamed 'package.json_disabled' to 'package.json'`);
  }
}
function analyzeCommits(pluginConfig, context) {
  if (context.lastRelease && context.lastRelease.version) {
    context.logger.log(`analyzeCommits: LAST_VERSION=${context.lastRelease.version}`);
  }
}
function verifyRelease(pluginConfig, context) {
  if (context.nextRelease && context.nextRelease.version) {
    context.logger.log(`verifyRelease: NEXT_VERSION=${context.nextRelease.version}`);
  }
}
module.exports = {
  verifyConditions,
  analyzeCommits,
  verifyRelease
};
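For reference, this is a minimal sketch of how those two log lines could be extracted from a saved job log with sed; the job.log file name is just an example:
# Extract the versions logged by the local plugin from a saved job log
LAST_VERSION="$(sed -ne 's/^.*LAST_VERSION=\(.*\)$/\1/p' job.log)"
NEXT_VERSION="$(sed -ne 's/^.*NEXT_VERSION=\(.*\)$/\1/p' job.log)"
echo "Last version: ${LAST_VERSION:-none}"
echo "Next version: ${NEXT_VERSION:-none}"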

The git_push_setup function
The code for the git_push_setup function is the following:
git_push_setup()
{
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}
The function assumes that the GITLAB_REPOSITORY_TOKEN variable (set in the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects where we are going to use this function. The SR_USER_NAME and SR_USER_EMAIL variables can be defined in a common file or in the CI/CD variables section of the project or group we want to work with, and the script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD value of type variable (that is why the keyfile is created on the fly); git is configured to use it if the variable is not empty.
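As a quick sanity check (not part of the pipeline), after calling the function in a throwaway repository we can create an empty commit and confirm that the commit object carries a signature block; this is just a sketch under the assumption that the signing variables are available in the environment:
# Create an empty commit and check that it contains a 'gpgsig' header
git commit --allow-empty -m "chore: test signed commit"
git cat-file commit HEAD | grep -c '^gpgsig'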
Warning: Keep in mind that the variables GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that you have to make the develop, main and release/* branches protected too).
Warning: The semantic-release user has to be able to push to all the projects on those protected branches. It is a good idea to create a dedicated user and add it as a MAINTAINER for the relevant projects (MAINTAINERs need to be able to push to the branches), or, if you are using GitLab with a Premium license, you can use the API to allow the semantic-release user to push to the protected branches without allowing it for any other user.

The semantic-release command
Once we have the .releaserc.json file and the git configuration ready we run the semantic-release command. If the branch we are working with has one or more commits that will increment the version, the tool does the following (the steps described are the ones executed with the configuration we have generated; a dry-run sketch follows the list):
  1. It detects the commits that will increment the version and calculates the next version number.
  2. Generates the release notes for the version.
  3. Applies the replacements defined in the configuration (in our example it updates the version field of the package.json file).
  4. Updates the CHANGELOG.md file, adding the release notes, if we are going to publish the file (when we are on the main branch).
  5. Creates a commit if all or some of the files listed in the assets key have changed, using the commit message we have defined and replacing the variables with their current values.
  6. Creates a tag with the new version number and the release notes.
  7. As we are using the gitlab plugin, after tagging it also creates a release on the project with the tag name and the release notes.
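To preview those steps without tagging or publishing anything, the semantic-release command accepts a --dry-run flag; a minimal sketch, assuming it is run in the same environment as the job above (same image and CI variables) after the configuration has been generated:
# After sr_gen_releaserc_json and git_push_setup have been run, preview the release:
# --dry-run prints the next version and the release notes without tagging or publishing
semantic-release --dry-run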

Notes about the git workflows and merges between branches
It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish; that has two important implications:
  1. On pre-release branches we need to have the commit that includes the tag with the released version; if we don't have it the next version is not calculated correctly.
  2. It is a bad idea to squash commits when merging one branch into another: if we do that we lose the information semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, ...) we miss all the messages that would otherwise go to the CHANGELOG.md file.
To make sure that we have the right commits on the pre-release branches we should merge the main branch changes into the develop one after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it into develop. The important thing about that MR is that it must not be squashed; if we do that the tag commit will probably be lost, so we need to be careful. To merge the changes directly we can run the following code:
# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push
If we can't push directly to develop we can create an MR pushing the followup branch after committing the changes, but we have to make sure that we don't squash the commits when merging or it will not work as we want.

23 December 2023

Russ Allbery: Review: Bookshops & Bonedust

Review: Bookshops & Bonedust, by Travis Baldree
Series: Legends & Lattes #2
Publisher: Tor
Copyright: 2023
ISBN: 1-250-88611-2
Format: Kindle
Pages: 337
Bookshops & Bonedust is a prequel to the cozy fantasy Legends & Lattes. You can read them in either order, although the epilogue of Bookshops & Bonedust spoils (somewhat guessable) plot developments in Legends & Lattes. Viv is a new member of the mercenary troop Rackam's Ravens and is still possessed of more enthusiasm than sense. As the story opens, she charges well ahead of her allies and nearly gets killed by a pike through the leg. She survives, but her leg needs time to heal and she is not up to the further pursuit of a necromancer. Rackam pays for a room and a doctor in the small seaside town of Murk and leaves her there to recuperate. The Ravens will pick her up when they come back through town, whenever that is. Viv is very quickly bored out of her skull. On a whim, and after some failures to find something else to occupy her, she tries a run-down local bookstore and promptly puts her foot through the boardwalk outside it. That's the start of an improbable friendship with the proprietor, a rattkin named Fern with a knack for book recommendations and a serious cash flow problem. Viv, being Viv, soon decides to make herself useful. The good side and bad side of this book are the same: it's essentially the same book as Legends & Lattes, but this time with a bookstore. There's a medieval sword and sorcery setting, a wide variety of humanoid species, a local business that needs love and attention (this time because it's failing instead of new), a lurking villain, an improbable store animal (this time a gryphlet that I found less interesting than the cat of the coffee shop), and a whole lot of found family. It turns out I was happy to read that story again, and there were some things I liked better in this version. I find bookstores more interesting than coffee shops, and although Viv and Fern go through a similar process of copying features of a modern bookstore, this felt less strained than watching Viv reinvent the precise equipment and menu of a modern coffee shop in a fantasy world. Also, Fern is an absolute delight, probably my favorite character in either of the books. I love the way that she uses book recommendations as a way of asking questions and guessing at answers about other people. As with the first book, Baldree's world-building is utterly unconcerned with trying to follow the faux-medieval conventions of either sword and sorcery or D&D-style role-playing games. On one hand, I like this; most of that so-called medievalism is nonsense anyway, and there's no reason why fantasy with D&D-style species diversity should be set in a medieval world. On the other hand, this world seems exactly like a US small town except the tavern also has rooms for rent, there are roving magical armies, and everyone fights with swords for some reason. It feels weirdly anachronistic, and I can't tell if that's because I've been brainwashed into thinking fantasy has to be medievaloid or if it's a true criticism of the book. I was reminded somewhat of reading Jack McDevitt's SF novels, which are supposedly set in the far future but are indistinguishable from 1980s suburbia except with flying cars. The other oddity with this book is that the reader of the series knows Viv isn't going to stay. This is the problem with writing a second iteration of this story as a prequel. I see why Baldree did it the story wouldn't have worked if Viv were already established but it casts a bit of a pall over the cheeriness of the story. 
Baldree to his credit confronts this directly, weaves it into the relationships, and salvages it a bit more in the epilogue, but it gave the story a sort of preemptive wistfulness that was at odds with how I wanted to read it. But, despite that, the strength of this book are the characters. Viv is a good person who helps where she can, which sounds like a simple thing but is so restful to read about. This book features her first meeting with the gnome Gallina, who is always a delight. There are delicious baked goods from a dwarf, a grumpy doctor, a grumpier city guard, and a whole cast of people who felt complicated and normal and essentially decent. I'm not sure the fantasy elements do anything for this book, or this series, other than marketing and the convenience of a few plot devices. Even though one character literally disappears into a satchel, it felt like Baldree could have written roughly the same story as a contemporary novel without a hint of genre. But that's not really a complaint, since the marketing works. I would not have read this series if it had been contemporary novels, and I thoroughly enjoyed it. It's a slice of life novel about kind and decent people for readers who are bored by contemporary settings and would rather read fantasy. Works for me. I'm hoping Baldree finds other stories, since I'm not sure I want to read this one several more times, but twice was not too much. If you liked Legends & Lattes and are thinking "how can I get more of that," here's the book for you. If you haven't read Legends & Lattes, I think I would recommend reading this one first. It does many of the same things, it's a bit more polished, and then you can read Viv's adventures in internal chronological order. Rating: 8 out of 10

21 December 2023

Russ Allbery: Review: The Box

Review: The Box, by Marc Levinson
Publisher: Princeton University Press
Copyright: 2006, 2008
Printing: 2008
ISBN: 0-691-13640-8
Format: Trade paperback
Pages: 278
The shipping container as we know it is only about 65 years old. Shipping things in containers is obviously much older; we've been doing that for longer than we've had ships. But the standardized metal box, set on a rail car or loaded with hundreds of its indistinguishable siblings into an enormous, specially-designed cargo ship, became economically significant only recently. Today it is one of the oft-overlooked foundations of global supply chains. The startlingly low cost of container shipping is part of why so much of what US consumers buy comes from Asia, and why most complex machinery is assembled in multiple countries from parts gathered from a dizzying variety of sources. Marc Levinson's The Box is a history of container shipping, from its (arguable) beginnings in the trailer bodies loaded on Pan-Atlantic Steamship Corporation's Ideal-X in 1956 to just-in-time international supply chains in the 2000s. It's a popular history that falls on the academic side, with a full index and 60 pages of citations and other notes. (Per my normal convention, those pages aren't included in the sidebar page count.) The Box is organized mostly chronologically, but Levinson takes extended detours into labor relations and container standardization at the appropriate points in the timeline. The book is very US-centric. Asian, European, and Australian shipping is discussed mostly in relation to trade with the US, and Africa is barely mentioned. I don't have the background to know whether this is historically correct for container shipping or is an artifact of Levinson's focus. Many single-item popular histories focus on something that involves obvious technological innovation (paint pigments) or deep cultural resonance (salt) or at least entertaining quirkiness (punctuation marks, resignation letters). Shipping containers are important but simple and boring. The least interesting chapter in The Box covers container standardization, in which a whole bunch of people had boring meetings, wrote some things done, discovered many of the things they wrote down were dumb, wrote more things down, met with different people to have more meetings, published a standard that partly reflected the fixations of that one guy who is always involved in standards discussions, and then saw that standard be promptly ignored by the major market players. You may be wondering if that describes the whole book. It doesn't, but not because of the shipping containers. The Box is interesting because the process of economic change is interesting, and container shipping is almost entirely about business processes rather than technology. Levinson starts the substance of the book with a description of shipping before standardized containers. This was the most effective, and probably the most informative, chapter. Beyond some vague ideas picked up via cultural osmosis, I had no idea how cargo shipping worked. Levinson gives the reader a memorable feel for the sheer amount of physical labor involved in loading and unloading a ship with mixed cargo (what's called "breakbulk" cargo to distinguish it from bulk cargo like coal or wheat that fills an entire hold). It's not just the effort of hauling barrels, bales, or boxes with cranes or raw muscle power, although that is significant. It's also the need to touch every piece of cargo to move it, inventory it, warehouse it, and then load it on a truck or train. 
The idea of container shipping is widely attributed, including by Levinson, to Malcom McLean, a trucking magnate who became obsessed with the idea of what we now call intermodal transport: using the same container for goods on ships, railroads, and trucks so that the contents don't have to be unpacked and repacked at each transfer point. Levinson uses his career as an anchor for the story, from his acquisition of Pan-American Steamship Corporation to pursue his original idea (backed by private equity and debt, in a very modern twist), through his years running Sea-Land as the first successful major container shipper, and culminating in his disastrous attempted return to shipping by acquiring United States Lines. I am dubious of Great Man narratives in history books, and I think Levinson may be overselling McLean's role. Container shipping was an obvious idea that the industry had been talking about for decades. Even Levinson admits that, despite a few gestures at giving McLean central credit. Everyone involved in shipping understood that cargo handling was the most expensive and time-consuming part, and that if one could minimize cargo handling at the docks by loading and unloading full containers that didn't have to be opened, shipping costs would be much lower (and profits higher). The idea wasn't the hard part. McLean was the first person to pull it off at scale, thanks to some audacious economic risks and a willingness to throw sharp elbows and play politics, but it seems likely that someone else would have played that role if McLean hadn't existed. Container shipping didn't happen earlier because achieving that cost savings required a huge expenditure of capital and a major disruption of a transportation industry that wasn't interested in being disrupted. The ships had to be remodeled and eventually replaced; manufacturing had to change; railroad and trucking in theory had to change (in practice, intermodal transport; McLean's obsession, didn't happen at scale until much later); pricing had to be entirely reworked; logistical tracking of goods had to be done much differently; and significant amounts of extremely expensive equipment to load and unload heavy containers had to be designed, built, and installed. McLean's efforts proved the cost savings was real and compelling, but it still took two decades before the shipping industry reconstructed itself around containers. That interim period is where this history becomes a labor story, and that's where Levinson's biases become somewhat distracting. In the United States, loading and unloading of cargo ships was done by unionized longshoremen through a bizarre but complex and long-standing system of contract hiring. The cost savings of container shipping comes almost completely from the loss of work for longshoremen. It's a classic replacement of labor with capital; the work done by gangs of twenty or more longshoreman is instead done by a single crane operator at much higher speed and efficiency. The longshoreman unions therefore opposed containerization and launched numerous strikes and other labor actions to delay use of containers, force continued hiring that containers made unnecessary, or win buyouts and payoffs for current longshoremen. Levinson is trying to write a neutral history and occasionally shows some sympathy for longshoremen, but they still get the Luddite treatment in this book: the doomed reactionaries holding back progress. 
Longshoremen had a vigorous and powerful union that won better working conditions structured in ways that look absurd to outsiders, such as requiring that ships hire twice as many men as necessary so that half of them could get paid while not working. The unions also had a reputation for corruption that Levinson stresses constantly, and theft of breakbulk cargo during loading and warehousing was common. One of the interesting selling points for containers was that lossage from theft during shipping apparently decreased dramatically. It's obvious that the surface demand of the longshoremen unions, that either containers not be used or that just as many manual laborers be hired for container shipping as for earlier breakbulk shipping, was impossible, and that the profession as it existed in the 1950s was doomed. But beneath those facts, and the smoke screen of Levinson's obvious distaste for their unions, is a real question about what society owes workers whose jobs are eliminated by major shifts in business practices. That question of fairness becomes more pointed when one realizes that this shift was massively subsidized by US federal and local governments. McLean's Sea-Land benefited from direct government funding and subsidized navy surplus ships, massive port construction in New Jersey with public funds, and a sweetheart logistics contract from the US military to supply troops fighting the Vietnam War that was so generous that the return voyage was free and every container Sea-Land picked up from Japanese ports was pure profit. The US shipping industry was heavily government-supported, particularly in the early days when the labor conflicts were starting. Levinson notes all of this, but never draws the contrast between the massive support for shipping corporations and the complete lack of formal support for longshoremen. There are hard ethical questions about what society owes displaced workers even in a pure capitalist industry transformation, and this was very far from pure capitalism. The US government bankrolled large parts of the growth of container shipping, but the only way that longshoremen could get part of that money was through strikes to force payouts from private shipping companies. There are interesting questions of social and ethical history here that would require careful disentangling of the tendency of any group to oppose disruptive change and fairness questions of who gets government support and who doesn't. They will have to wait for another book; Levinson never mentions them. There were some things about this book that annoyed me, but overall it's a solid work of popular history and deserves its fame. Levinson's account is easy to follow, specific without being tedious, and backed by voluminous notes. It's not the most compelling story on its own merits; you have to have some interest in logistics and economics to justify reading the entire saga. But it's the sort of history that gives one a sense of the fractal complexity of any area of human endeavor, and I usually find those worth reading. Recommended if you like this sort of thing. Rating: 7 out of 10

20 December 2023

Ulrike Uhlig: How volunteer work in F/LOSS exacerbates pre-existing lines of oppression, and what that has to do with low diversity

This is a post I wrote in June 2022, but did not publish back then. After first publishing it in December 2023, a perfectionist insecure part of me unpublished it again. After receiving positive feedback, I slightly amended it and am republishing it now. In this post, I talk about unpaid work in F/LOSS, taking the example of hackathons, and why, in my opinion, the expectation of volunteer work is hurting diversity. Disclaimer: I don't have all the answers, only some ideas and questions.

Previous findings In 2006, the Flosspols survey searched to explain the role of gender in free/libre/open source software (F/LOSS) communities because an earlier [study] revealed a significant discrepancy in the proportion of men to women. It showed that just about 1.5% of F/LOSS community members were female at that time, compared with 28% in proprietary software (which is also a low number). Their key findings were, to name just a few:
  • that F/LOSS rewards the producing code rather than the producing software. It thereby puts most emphasis on a particular skill set. Other activities such as interface design or documentation are understood as less technical and therefore less prestigious.
  • The reliance on long hours of intensive computing in writing successful code means that men, who in general assume that time outside of waged labour is theirs , are freer to participate than women, who normally still assume a disproportionate amount of domestic responsibilities. Female F/LOSS participants, however, seem to be able to allocate a disproportionate larger share of their leisure time for their F/LOSS activities. This gives an indication that women who are not able to spend as much time on voluntary activities have difficulties to integrate into the community.
We also know from the 2016 Debian survey, published in 2021, that a majority of Debian contributors are employed, rather than being contractors, and rather than being students. Also, 95.5% of respondents to that study were men between the ages of 30 and 49, highly educated, with the largest groups coming from Germany, France, USA, and the UK. The study found that only 20% of the respondents were being paid to work on Debian. Half of these 20% estimate that the amount of work on Debian they are being paid for corresponds to less than 20% of the work they do there. On the other side, there are 14% of those who are being paid for Debian work who declared that 80-100% of the work they do in Debian is remunerated.

So, if a majority of people is not paid, why do they work on F/LOSS? Or: What are the incentives of free software? In 2021, Louis-Philippe V ronneau aka Pollo, who is not only a Debian Developer but also an economist, published his thesis What are the incentive structures of free software (The actual thesis was written in French). One very interesting finding Pollo pointed out is this one:
Indeed, while we have proven that there is a strong and significative correlation between the income and the participation in a free/libre software project, it is not possible for us to pronounce ourselves about the causality of this link.
In the French original text:
En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.
Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it s unclear whether working on free/libre software ultimately helps finding a well paid job, or if having a well paid job is the cause enabling work on free/libre software. I would like to scratch this question a bit further, mostly relying on my own observations, experiences, and discussions with F/LOSS contributors.

Volunteer work is unpaid work We often hear of hackathons, hack weeks, or hackfests. I ve been at some such events myself, Tails organized one, the IETF regularly organizes hackathons, and last week (June 2022!) I saw an invitation for a hack week with the Torproject. This type of event generally last several days. While the people who organize these events are being paid by the organizations they work for, participants on the other hand are generally joining on a volunteer basis. Who can we expect to show up at this type of event under these circumstances as participants? To answer this question, I collected some ideas:
  • people who have an employer sponsoring their work
  • people who have a funder/grant sponsoring their work
  • people who have a high income and can take time off easily (in that regard, remember the Gender Pay Gap, women often earn less for the same work than men)
  • people who rely on family wealth (living off an inheritance, living on rights payments from a famous grandparent - I m not making these situations up, there are actual people in such financially favorable situations )
  • people who don t need much money because they don t have to pay rent or pay low rent (besides house owners that category includes people who live in squats or have social welfare paying for their rent, people who live with parents or caretakers)
  • people who don t need to do care work (for children, elderly family members, pets. Remember that most care work is still done by women.)
  • students who have financial support or are in a situation in which they do not yet need to generate a lot of income
  • people who otherwise have free time at their disposal
So, who, in your opinion, fits these unwritten requirements? Looking at this list, it s pretty clear to me why we d mostly find white men from the Global North, generally with higher education in hackathons and F/LOSS development. ( Great, they re a culture fit! ) Yes, there will also always be some people of marginalized groups who will attend such events because they expect to network, to find an internship, to find a better job in the future, or to add their participation to their curriculum. To me, this rings a bunch of alarm bells.

Low diversity in F/LOSS projects a mirror of the distribution of wealth I believe that the lack of diversity in F/LOSS is first of all a mirror of the distribution of wealth on a larger level. And by wealth I m referring to financial wealth as much as to social wealth in the sense of Bourdieu: Families of highly educated parents socially reproducing privilege by allowing their kids to attend better schools, supporting and guiding them in their choices of study and work, providing them with relations to internships acting as springboards into well paid jobs and so on. That said, we should ask ourselves as well:

Do F/LOSS projects exacerbate existing lines of oppression by relying on unpaid work? Let s look again at the causality question of Pollo s research (in my words):
It is unclear whether working on free/libre software ultimately helps finding a well paid job, or if having a well paid job is the cause enabling work on free/libre software.
Maybe we need to imagine this cause-effect relationship over time: as a student, without children and lots of free time, hopefully some money from the state or the family, people can spend time on F/LOSS, collect experience, earn recognition - and later find a well-paid job and make unpaid F/LOSS contributions into a hobby, cementing their status in the community, while at the same time generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don t own an actual laptop or desktop computer anymore, instead they own mobile devices which are not exactly inciting them to look behind the surface, take apart, learn, appropriate technology.) In any case, the above scenario does not allow for people who join F/LOSS later in life, eg. changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working class backgrounds, people from outside of Germany, France, USA, UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.

Wait, are you criticizing all these wonderful people who sacrifice their free time to work towards common good? No, that s definitely not my intention, I m glad that F/LOSS exists, and the F/LOSS ecosystem has always represented a small utopia to me that is worth cherishing and nurturing. However, I think we still need to talk more about the lack of diversity, and investigate it further.

Some types of work are never being paid Besides free work at hacking events, let me also underline that a lot of work in F/LOSS is not considered payable work (yes, that s an oxymoron!). Which F/LOSS project for example, has ever paid translators a decent fee? Which project has ever considered that doing the social glue work, often done by women in the projects, is work that should be paid for? Which F/LOSS projects pay the people who do their Debian packaging rather than relying on yet another already well-paid white man who can afford doing this work for free all the while holding up how great the F/LOSS ecosystem is? And how many people on opensourcedesign jobs are looking to get their logo or website done for free? (Isn t that heart icon appealing to your altruistic empathy?) In my experience even F/LOSS projects which are trying to do the right thing by paying everyone the same amount of money per hour run into issues when it turns out that not all hours are equal and that some types of work do not qualify for remuneration at all or that the rules for the clocking of work are not universally applied in the same way by everyone.

Not every interaction should have a monetary value, but Some of you want to keep working without being paid, because that feels a bit like communism within capitalism, it makes you feel good to contribute to the greater good while not having the system determine your value over money. I hear you. I ve been there (and sometimes still am). But as long as we live in this system, even though we didn t choose to and maybe even despise it - communism is not about working for free, it s about getting paid equally and adequately. We may not think about it while under the age of 40 or 45, but working without adequate financial compensation, even half of the time, will ultimately result in not being able to care for oneself when sick, when old. And while this may not be an issue for people who inherit wealth, or have an otherwise safe economical background, eg. an academic salary, it is a huge problem and barrier for many people coming out of the working or service classes. (Oh and please, don t repeat the neoliberal lie that everyone can achieve whatever they aim for, if they just tried hard enough. French research shows that (in France) one has only 30% chance to become a class defector , and change social class upwards. But I managed to get out and move up, so everyone can! - well, if you believe that I m afraid you might be experiencing survivor bias.)

Not all bodies are equally able We should also be aware that not all of us can work with the same amount of energy either. There is yet another category of people who are excluded by the expectation of volunteer work, either because the waged labour they do already eats all of their energy, or because their bodies are not disposed to do that much work, for example because of mental health issues - such as depression-, or because of physical disabilities.

When organizing events relying on volunteer work please think about these things. Yes, you can tell people that they should ask their employer to pay them for attending a hackathon - but, as I ve hopefully shown, that would not do it for many people, especially newcomers. Instead, you could propose a fund to make it possible that people who would not normally attend can attend. DebConf is a good example for having done this for many years.

Conclusively I would like to urge free software projects that have a budget and directly pay some people from it to map where they rely on volunteer work and how this hurts diversity in their project. How do you or your project exacerbate pre-existing lines of oppression by granting or not granting monetary value to certain types of work? What is it that you take for granted? As always, I m curious about your feedback!

Worth a read These ideas are far from being new. Ashe Dryden s well-researched post The ethics of unpaid labor and the OSS community dates back to 2013 and is as important as it was ten years ago.

Russell Coker: Abuse and Free Software

People in positions of power can get away with mistreating other people. For any organisation to operate effectively there have to be mechanisms to address bad behaviour, both to help the organisation to achieve it s goals and to protect people who work for it. When an organisation operates in the public interest there is a greater reason to try to prevent bad behaviour as hurting people is not in the public interest. There are many forms of power, in the free software community a reputation for doing good technical work or work related to supporting software development gives some power and influence. We have seen examples of technical contributions used to excuse mistreatment of other people. The latest example of using a professional reputation to cover for abuse is Eben Moglen who has done some good legal work in the past while also treating members of the community badly (as documented by Matthew Garrett) [1]. Matthew has also documented how since 2016 Eben has not been doing good work for the free software community [2]. When news comes out about people who did good work while abusing other people they are usually defended with claims such as we can t lose the great contributions of this one person so it s worth losing the contributions of everyone who can t work with them , but in such situations it s very common to discover that they haven t been doing great work. This might be partly due to abusive people being better at self-promoting than actually doing good work and might be partly due to the fact that people who are afraid to speak out when they are doing good work might suddenly feel ready to go public if the person s work (defence) is decreasing. Bradley Kuhn s article about this situation is worth reading [3]. I don t have as much knowledge of the people involved in these disputes as Matthew, but I know enough about what is happening to be confident that Matthew s summary is accurate.

19 December 2023

Jonathan Dowland: William Basinski, Gateshead, 2022

I was looking over the list of live music I'd seen this year and realised that avante-garde composer William Basinski was actually last year and I had forgotten to write about it! In November 2022, Basinski headlined a night of performances which otherwise featured folk from the venue's "Arists in Residence" programme, with some affiliation to Newcastle's DIY music scene. Unfortunately we arrived too late to catch any of the other acts: partly because of the venue's sometimes doggiest insistence that people can only enter or leave the halls during intervals, and partly because the building works surrounding it had made the southern entrance effectively closed, so we had to walk to the north side of the building to get in1. Basinski was performing work from Lamentations. Basinski himself presented very unexpectedly to how I imagined him: he's got the Texas drawl, mediated through a fair amount of time spent in New York; very camp, in a glittery top; he kicked off the gig complaining about how tired he was, before a mini rant about the state of the world, riffing on a title from the album: Please, This Shit Has Got To Stop. We were in Hall 1, the larger of the two, and it was sparsely attended; a few people walked out mid performance. My gig-buddy Rob (a useful barometer for me on how things have gone) remarked that it was one of the most unique and unusual gigs he'd been to. I recognised snatches of the tracks from the album, but I'm hard-placed to name or sequence them, and they flowed into each other. I don't know how much of what we were hearing was "live" or what, if anything, was being decided during the performance, but Basinski's set-up included what looked like archaic tape equipment, with exposed loops of tape running between spools that could be interfered with by other tools. The encore was a unique, unreleased mix of Melancholia (II), which (making no apologies) Basinski hit play on before retiring backstage. I didn't take any photos. From memory, I think the venue had specifically stated filming or photos were not allowed for this performance. People at prior shows in New York and London filmed some of their shows; which were substantially similar: I've included embeds of them above. Lots of Basinski's work is on Bandcamp; the three pieces I particularly enjoy are Lamentations, Lamentations by William Basinski On Time Out of Time, On Time Out of Time by William Basinski and his best-known work, The Disintegration Loops. The Disintegration Loops by William Basinski

  1. I don't want to speak ill of the venue, though: The Sage, as it was, and the Glasshouse, as it is now known, has ended up being the venue I've attended most this year (2023), and it's such a civilised place: plenty of bars, great drinks selection (both alcoholic and not, hot and cold), loads of clean toilets, a free cloakroom, fantastic accoustics, polite staff; the list goes on.

17 December 2023

Dirk Eddelbuettel: littler 0.3.19 on CRAN: Several Updates

max-heap image The twentieth release of littler as a CRAN package landed a few minutes ago, following in the now seventeen year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later. littler is the first command-line interface for R as it predates Rscript. It allows for piping as well for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package which Rscript only began to do in recent years. littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo:, as well as in the examples vignette. This release contains a fair number of small changes and improvements to some of the example scripts is run daily. The full change description follows.

Changes in littler version 0.3.19 (2023-12-17)
  • Changes in examples scripts
    • The help or usage text display for r2u.r, ttt.r, check.r has been improved, expanded or corrected, respectively
    • installDeps.r has a new argument for dependency selection
    • An initial 'single test file' runner tttf.r has been added
    • r2u.r has two new options for setting / varying the Debian build version of package that is built, and one for BioConductor builds, one for a 'dry run' build, and a new --compile option
    • installRSPM.r, installPPM.r, installP3M.r have been updated to reflect the name changes
    • installRub.r now understands 'package@universe' too
    • tt.r flips the default of the --effects switch

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 December 2023

Dirk Eddelbuettel: RProtoBuf 0.4.21 on CRAN: Updated Upstream Support!

An exciting new release 0.4.21 of RProtoBuf arrived on CRAN earlier today. RProtoBuf provides R with bindings for the Google Protocol Buffers ( ProtoBuf ) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol. ProtoBuf development, following what seemed like a multi-year lull, all of a sudden picked up again with a vengeance a little while ago. And the library releases we rely on for convenience and provided by the Linux distributions are lagging. So last summer we received an excellent, and focussed, pull request #93 offering to update the package to the newer ProtoBuf 22.0 and beyond. (Aside: When a library ditches its numbering scheme you know changes are for real . My Ubuntu 23.10 box is still at 3.21 in a different counting scheme .) But it wasn t until last weekend the issue ticket #95 by Sebastian ran into the same issue, but recognized it and contained a container recipe! So now all of a sudden we were able to build under a newer ProtoBuf which made accepting the PR #93 much easier! We added this as an additional continuous unit test, and made a few other smaller updates to documentation and style. The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.21 (2022-12-13)
  • Package now builds with ProtoBuf >= 22.x thanks to Matteo Gianella (#93 addressing #92).
  • An Alpine 3.19-based workflow was added to test this in continuous integration thanks to a suggestion by Sebastian Meyer.
  • A large number of old-style .Call were updated (#96).
  • Several packaging, documentation and testing items were updated.

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 December 2023

Jonathan Dowland: equivalence problems with StreamGraph

I've been tackling an equivalence problem with rewritten programs in StrIoT, our proof-of-concept stream-processing system. The StrIoT Logical Optimiser applies a set of rewrite rules to a stream-processing program, generating a set of variants that can be reasoned about, ranked, and deployed. The problem I've been tackling is that a variant may appear to be semantically equivalent to another, but compare (with ==) as distinct. The issue relates to the design of our data-type representing programs, in particular, a consequence of our choice to outsource the structural aspect to a 3rd-party library (Algebra.Graph). The Graph library deems nodes that compare as equivalent (again with ==) to be the same. Since a stream-processing program may contain many operators which are equivalent, but distinct, we needed to add a field to our payload type to differentiate them: so we opted for an Integer field, vertexId (something I've described as a "wart" elsewhere) Here's a simplified example of our existing payload type, StreamVertex:
data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [ExpQ]
    , intype     :: String
    , outtype    :: String
    }
A rewrite rule might introduce or eliminate operators from a stream-processing program. For example, consider the rule which "hoists" a filter upstream from a merge operator. In pseudo-Haskell,
streamFilter p . streamMerge [a , b, ...]
=>
streamMerge [ streamFilter p a
            , streamFilter p b
            , ...]
The original streamFilter is removed, and new streamFilters are introduced, one per stream arriving at streamMerge. In general, rules may need to synthesise new operators, and thus new vertexIds. Another rewrite rule might perform the reverse operation. But the individual rules operate in isolation: and so, the program variant that results after applying a rule and then applying an inverse rule may not have the same vertexIds, or the same order of vertexIds, as the original program. I thought of the outline of two possible solutions to this. "well-numbered" StreamGraphs The first was to encode (and enforce) some rules about how vertexIds are used. If they always began from (say) 1, and were strictly-ascending from the source operator(s), and rewrite rules guaranteed that a "well numbered" input would be "well numbered" after rewriting, this would be sufficient to rule out a rewritten-but-semantically-equivalent program being considered distinct. The trouble with this approach is using properties of a numerical system built around vertexId as a stand-in for the real structural problem. I was not sure I could prove both that the stand-in system was sound and that it was a proper analogue for the underlying structural issue. It feels to me more that the choice to use an external library to encode the structure of a stream-processing program was the issue: the structure itself is a fundamental part of the semantics of the program. What if we had encoded the structure of programs within the same data-type? alternative data-type StrIoT programs are trees. The root is the sink node: there is always exactly one. There can be multiple source (leaf nodes), but they always converge. Operators can have multiple inputs (including zero). The root node has no output, but all other operators have exactly one. I explored transforming StreamVertex into a tree by adding a field representing incoming streams, and dispensing with Graph and vertexId. Something like this
data StreamProg = StreamProg StreamOperator [Exp] String String [StreamProg]
A uni-directional transformation from Graph StreamVertex to StreamProg is all that's needed to implement something like ==, so we don't need to keep track of vertexId mappings. Unfortunately, we can't fix the actual Eq (Graph StreamVertex) implementation this way: it delegates to Eq StreamVertex, and we just don't have enough information to fix the problem at that level. But, we can write a separate graphEq and use that instead where we need to. could I go further? Spoiler: I haven't. But I've been sorely tempted. We still have a separate StreamOperator type, which it would be nice to fold in; and we still have to use a list around the incoming nodes, since different operators accept different numbers of incoming streams. It would be better to encode the correct valences in the type. In 2020 I explored iteratively reducing the StreamVertex data-type to try and get it as close as possible to the ideal end-user API: simple functions. I wrote about one step along that path in Template Haskell and Stream-processing programs, but concluded that, since this was not my main PhD focus, I wouldn't go further. But it was nagging at my subconcious ever since. I allowed myself a couple of days exploring some advanced concepts including typed Template Haskell (that has had some developments since 2020), generalised abstract data types (GADTs) and more generic programming to see what could be achieved. I'll summarise all that in the next blog post.

Melissa Wen: 15 Tips for Debugging Issues in the AMD Display Kernel Driver

A self-help guide for examining and debugging the AMD display driver within the Linux kernel/DRM subsystem. It's based on my experience as an external developer working on the driver, and is shared with the goal of helping others navigate the driver code. Acknowledgments: These tips were gathered thanks to the countless help received from AMD developers during the driver development process. The list below was obtained by examining open source code, reviewing public documentation, playing with tools, asking in public forums and also with the help of my former GSoC mentor, Rodrigo Siqueira.

Pre-Debugging Steps: Before diving into an issue, it's crucial to perform two essential steps: 1) Check the latest changes: Ensure you're working with the latest AMD driver modifications located in the amd-staging-drm-next branch maintained by Alex Deucher. You may also find bug fixes for newer kernel versions on branches that have the name pattern drm-fixes-<date>. 2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.

Understanding the issue: Do you really need to change this? Where should you start looking for changes?
3) Is the issue in the AMD kernel driver or in the userspace?: Identifying the source of the issue is essential regardless of the GPU vendor. Sometimes this can be challenging, so here are some helpful tips (a dmesg filtering sketch follows the list):
  • Record the screen: Capture the screen using a recording app while experiencing the issue. If the bug appears in the capture, it's likely a userspace issue, not the kernel display driver.
  • Analyze the dmesg log: Look for error messages related to the display driver in the dmesg log. If the error message appears before the message "[drm] Display Core v...", it's not likely a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here.
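For example, a couple of commands one could start with (a sketch; exact messages vary between kernel versions):
# Display-driver related kernel messages
sudo dmesg | grep -iE 'drm|amdgpu'
# Check whether the Display Core banner is present
sudo dmesg | grep -i 'Display Core'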
4) AMD Display Manager vs. AMD Display Core: The AMD display driver consists of two components:
  • Display Manager (DM): This component interacts directly with the Linux DRM infrastructure. Occasionally, issues can arise from misinterpretations of DRM properties or features. If the issue doesn't occur on other platforms with the same AMD hardware - for example, only happens on Linux but not on Windows - it's more likely related to the AMD DM code.
  • Display Core (DC): This is the platform-agnostic part responsible for setting and programming hardware features. Modifications to the DC usually require validation on other platforms, like Windows, to avoid regressions.
5) Identify the DC HW family: Each AMD GPU has variations in its hardware architecture. Features and helpers differ between families, so determining the relevant code for your specific hardware is crucial.
  • Find GPU product information in Linux/AMD GPU documentation
  • Check the dmesg log for the Display Core version (since this commit in Linux kernel 6.3v). For example:
    • [drm] Display Core v3.2.241 initialized on DCN 2.1
    • [drm] Display Core v3.2.237 initialized on DCN 3.0.1

Investigating the relevant driver code: Keep unrelated driver code from affecting your investigation.
6) Narrow the code inspection down to one DC HW family: the relevant code resides in a directory named after the DC number. For example, the DCN 3.0.1 driver code is located at drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge and you can use these boundaries to rule out code unrelated to your issue.
7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20 and dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes to investigate the right portion. You can leverage ftrace for supplemental validation; to give an example, it was useful when I was updating DCN3 color mapping to correctly use the new post-blending color capabilities. Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed and whether the implementation from a previous family doesn't fit well with the new HW resources or design. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems. This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect on DCN3 hardware, but working correctly for the DCN2 family; I solved the issue in two steps, thanks to community feedback and validation.
8) Check the hardware capability screening in the driver: You can currently find a list of display hardware capabilities in the drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function. Using DCN301 for illustration, here is the list of its hardware caps:
	/*************************************************
	 *  Resource + asic cap harcoding                *
	 *************************************************/
	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
	pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
	pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
	dc->caps.max_downscale_ratio = 600;
	dc->caps.i2c_speed_in_khz = 100;
	dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
	dc->caps.max_cursor_size = 256;
	dc->caps.min_horizontal_blanking_period = 80;
	dc->caps.dmdata_alloc_size = 2048;
	dc->caps.max_slave_planes = 2;
	dc->caps.max_slave_yuv_planes = 2;
	dc->caps.max_slave_rgb_planes = 2;
	dc->caps.is_apu = true;
	dc->caps.post_blend_color_processing = true;
	dc->caps.force_dp_tps4_for_cp2520 = true;
	dc->caps.extended_aux_timeout_support = true;
	dc->caps.dmcub_support = true;
	/* Color pipeline capabilities */
	dc->caps.color.dpp.dcn_arch = 1;
	dc->caps.color.dpp.input_lut_shared = 0;
	dc->caps.color.dpp.icsc = 1;
	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
	dc->caps.color.dpp.post_csc = 1;
	dc->caps.color.dpp.gamma_corr = 1;
	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
	dc->caps.color.dpp.hw_3d_lut = 1;
	dc->caps.color.dpp.ogam_ram = 1;
	// no OGAM ROM on DCN301
	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
	dc->caps.color.dpp.ocsc = 0;
	dc->caps.color.mpc.gamut_remap = 1;
	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
	dc->caps.color.mpc.ogam_ram = 1;
	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
	dc->caps.color.mpc.ocsc = 1;
	dc->caps.dp_hdmi21_pcon_support = true;
	/* read VBIOS LTTPR caps */
	if (ctx->dc_bios->funcs->get_lttpr_caps) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_lttpr_enable = 0;
		bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
		dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
	}
	if (ctx->dc_bios->funcs->get_lttpr_interop) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_interop_enabled = 0;
		bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
		dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
	}
Keep in mind that documentation of the color capabilities is available in the Linux kernel documentation.
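To illustrate item 7, here is a minimal ftrace sketch for checking which family-specific hooks are actually being called. It assumes tracefs is mounted at /sys/kernel/tracing and that you run it as root; the dcn30_*/dcn301_* patterns are only illustrative, and helpers inlined by the compiler will not show up:
cd /sys/kernel/tracing
# Check which DC helpers are traceable for your family, e.g. DCN 3.0.x:
grep -E "dcn30|dcn301" available_filter_functions | head
# Trace only those helpers with the function_graph tracer:
echo 'dcn30_* dcn301_*' > set_ftrace_filter
echo function_graph > current_tracer
echo 1 > tracing_on
# Reproduce the issue (mode change, cursor move, plane update...), then stop
# tracing and inspect which hooks ran:
echo 0 > tracing_on
less trace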

Understanding the development history: What has brought us to the current state? 9) Pinpoint relevant commits: Use git log and git blame to identify commits targeting the code section you're interested in (a small git sketch follows this paragraph). 10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format drm/amd/display: 3.2.221 that marks a display release; it's useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can assume each DC_VER bump takes around one week. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as this one with Tested-by: Daniel Wheeler.
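As a concrete sketch, assuming you are inside a kernel checkout on the amd-staging-drm-next branch and following the commit-subject convention described above:
# List recent display release bumps ("drm/amd/display: 3.2.NNN" commits that
# touch dc.h); these make convenient bisect points.
git log --oneline --grep='drm/amd/display: 3\.2\.' -- drivers/gpu/drm/amd/display/dc/dc.h
# See which commit last changed the DC_VER definition:
git blame -L '/#define DC_VER/,+1' drivers/gpu/drm/amd/display/dc/dc.h
# Find commits touching the code you are investigating, e.g. the DCN301 family:
git log --oneline -- drivers/gpu/drm/amd/display/dc/dcn301/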

Reducing the inspection area: Focus on what really matters. 11) Identify the involved HW blocks: This helps isolate the issue. You can find more information about DCN HW blocks in the DCN Overview documentation. In summary:
  • Plane issues are closer to HUBP and DPP.
  • Blending/Stream issues are closer to MPC, OPP and OPTC. They are related to DRM CRTC subjects.
This information was useful when debugging a hardware rotation issue where the cursor plane got clipped off in the middle of the screen. Finally, the issue was addressed by two patches. 12) Issues around bandwidth (glitches) and clocks: These may be affected by calculations done in these HW blocks and by HW-specific values. The recalculation equations are found in the DML folder. DML stands for Display Mode Library. It's in charge of all required configuration parameters supported by the hardware for multiple scenarios. See more in the AMD DC Overview kernel docs. It's a math library that optimally configures hardware to find the best balance between power efficiency and performance in a given scenario. Finding some clk variables that affect device behavior may be a sign of it. It's hard for an external developer to debug this part, since it involves information from HW specs and firmware programming that we don't have access to. The best option is to provide all the relevant debugging information you have and ask AMD developers to check the values you suspect.
  • Do a trick: If you suspect the power setup is degrading performance, try setting the amount of power supplied to the GPU to the maximum and see if it affects system behavior with this command (a slightly fuller sketch follows the next paragraph): sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
I learned this trick when debugging glitches with hardware cursor rotation on the Steam Deck. My first attempt was changing the clock calculation. In the end, Rodrigo Siqueira proposed the right solution targeting bandwidth in two steps.
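If you want to rule the power and clock setup in or out before digging into DML, a small sysfs sketch like the one below can help. It assumes the amdgpu driver and card0, so adjust the card index for your system, and remember to restore the default policy afterwards:
GPU=/sys/class/drm/card0/device
# Current performance level and clock states before the experiment:
cat $GPU/power_dpm_force_performance_level
cat $GPU/pp_dpm_sclk $GPU/pp_dpm_mclk
# Force maximum clocks, reproduce the glitch, then restore the default policy:
echo high | sudo tee $GPU/power_dpm_force_performance_level
# ... reproduce the issue here ...
echo auto | sudo tee $GPU/power_dpm_force_performance_level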

Checking implicit programming and hardware limitations: Bring implicit programming to the level of consciousness and recognize hardware limitations. 13) Implicit update types: Check whether the type selected for the atomic update may affect your issue. The update type depends on the mode settings, since programming some modes demands more time for hardware processing. More details are in the source code:
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything
 * front end related. Any time viewport dimensions, recout dimensions, scaling
 * ratios or gamma need to be adjusted or pipe needs to be turned on (or
 * disconnected) we do a full update. This cannot be done at ISR level and
 * should be a rare event. Unless someone is stress testing mpo enter/exit,
 * playing with colour or adjusting underscan we don't expect to see this call
 * at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED,  /* ISR safe, most of programming needed, no bw/clk change*/
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};

Using tools: Observe the current state, validate your findings, continue improvements. 14) Use AMD tools to check the hardware state and driver programming: they help you understand your driver settings and check the behavior when changing those settings (a short debugfs sketch follows this list).
  • DC Visual confirmation: Check multiple planes and pipe split policy.
  • DTN logs: Check display hardware state, including rotation, size, format, underflow, blocks in use, color block values, etc.
  • UMR: Check ASIC info, register values, KMS state - links and elements (framebuffers, planes, CRTCs, connectors). Source: UMR project documentation
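For the first two items, a minimal debugfs sketch looks like this; it assumes the amdgpu debugfs entries live under /sys/kernel/debug/dri/0 (the index may differ on your system) and needs root:
DRI=/sys/kernel/debug/dri/0
# Enable DC visual confirmation (colored bars showing which pipes/planes are
# in use); write 0 to turn it off again.
echo 1 | sudo tee $DRI/amdgpu_dm_visual_confirm
# Dump the DTN log with the current display hardware state:
sudo cat $DRI/amdgpu_dm_dtn_log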
15) Use generic DRM/KMS tools:
  • IGT test tools: Use generic KMS tests or develop your own to isolate the issue in kernel space. Compare results across different GPU vendors to understand their implementations and find potential solutions. AMD also has specific IGT tests for its GPUs that are expected to work without failures on any AMD GPU. You can check the results of HW-specific tests using different display hardware families, or you can compare expected differences between the generic workflow and the AMD workflow (see the sketch after this list).
  • drm_info: This tool summarizes the current state of a display driver (capabilities, properties and formats) per element of the DRM/KMS workflow. Output can be helpful when reporting bugs.
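A short sketch of how these tools might be invoked; the IGT build path, the test and the subtest names are only illustrative, so adjust them to your checkout and to the behavior you suspect:
# From an igt-gpu-tools checkout built into ./build:
./build/tests/kms_color --list-subtests
sudo ./build/tests/kms_color --run-subtest degamma   # subtest name is illustrative
# Capture the driver's KMS state (capabilities, properties, formats) so it
# can be attached to a bug report:
drm_info > drm_info-$(date +%F).txt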

Don't give up! Debugging issues in the AMD display driver can be challenging, but by following these tips and leveraging available resources, you can significantly improve your chances of success. Worth mentioning: this blog post builds upon my 2022 XDC talk, I'm not an AMD expert, but... It shares guidelines that helped me debug AMD display issues as an external developer of the driver. Open Source Display Driver: The Linux kernel/AMD display driver is open source, allowing you to actively contribute by addressing issues listed in the official tracker. Tackling existing issues or resolving your own can be a rewarding learning experience and a valuable contribution to the community. Additionally, the tracker serves as a valuable resource for finding similar bugs, troubleshooting tips and suggestions from AMD developers. Finally, it's a platform for seeking help when needed. Remember, contributing to the open source community through issue resolution and collaboration is mutually beneficial for everyone involved.

6 December 2023

Reproducible Builds: Reproducible Builds in November 2023

Welcome to the November 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a rather rapid recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries (more).

Reproducible Builds Summit 2023 Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Amazingly, the agenda and all notes from all sessions are online; many thanks to everyone who wrote notes from the sessions. As a followup on one idea started at the summit, Alexander Couzens and Holger Levsen started work on a cache (or tailored front-end) for the snapshot.debian.org service. The general idea is that, when rebuilding Debian, you do not actually need the whole ~140TB of data from snapshot.debian.org; rather, only a very small subset of the packages are ever used for building. It turns out that, for amd64, arm64, armhf, i386, ppc64el, riscv64 and s390 for Debian trixie, unstable and experimental, this is only around 500GB, i.e. less than 1%. Although the new service is not yet ready for use, it has already provided a promising outlook in this regard. More information is available on https://rebuilder-snapshot.debian.net and we hope that this service becomes usable in the coming weeks. The adjacent picture shows a sticky note authored by Jan-Benedict Glaw at the summit in Hamburg, confirming Holger Levsen's theory that rebuilding all Debian packages needs a very small subset of packages; the text states that 69,200 packages (in Debian sid) list 24,850 packages in their .buildinfo files, in 8,0200 variations. This little piece of paper was the beginning of rebuilder-snapshot and is a direct outcome of the summit! The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Beyond Trusting FOSS presentation at SeaGL On November 4th, Vagrant Cascadian presented Beyond Trusting FOSS at SeaGL in Seattle, WA in the United States. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free and open source software, hardware and culture. The summary of Vagrant's talk mentions that it will:
[ ] introduce the concepts of Reproducible Builds, including best practices for developing and releasing software, the tools available to help diagnose issues, and touch on progress towards solving decades-old, deeply pervasive, fundamental security issues. Learn how to verify and demonstrate trust, rather than simply hoping everything is OK!
Germane to the contents of the talk, the slides for Vagrant's talk can be built reproducibly, resulting in a PDF with a SHA1 of cfde2f8a0b7e6ec9b85377eeac0661d728b70f34 when built on Debian bookworm and c21fab273232c550ce822c4b0d9988e6c49aa2c3 on Debian sid at the time of writing.

Human Factors in Software Supply Chain Security Marcel Fourné, Dominik Wermke, Sascha Fahl and Yasemin Acar have published an article in a Special Issue of the IEEE's Security & Privacy magazine. Entitled A Viewpoint on Human Factors in Software Supply Chain Security: A Research Agenda, the paper justifies the need for reproducible builds to reach developers and end-users specifically, and furthermore points out some under-researched topics that we have seen mentioned in interviews. An author pre-print of the article is available in PDF form.

Community updates On our mailing list this month:

openSUSE updates Bernhard M. Wiedemann has created a wiki page outlining a proposal to create a general-purpose Linux distribution which consists of 100% bit-reproducible packages, albeit minus the embedded signature within RPM files. It would be based on openSUSE Tumbleweed or, if available, its Slowroll variant. In addition, Bernhard posted another monthly update for his work elsewhere in openSUSE.

Ubuntu Launchpad now supports .buildinfo files Back in 2017, Steve Langasek filed a bug against Ubuntu's Launchpad code hosting platform to report that .changes files (artifacts of building Ubuntu and Debian packages) reference .buildinfo files that aren't actually exposed by Launchpad itself. This was causing issues when attempting to process .changes files with tools such as Lintian. However, it was noticed last month that, in early August of this year, Simon Quigley had resolved this issue, and .buildinfo files are now available from the Launchpad system.

PHP reproducibility updates There have been two updates from the PHP programming language this month. Firstly, the widely-deployed PHPUnit framework for the PHP programming language has recently released version 10.5.0, which introduces the inclusion of a composer.lock file, ensuring total reproducibility of the shipped binary file. Further details and the discussion that went into their particular implementation can be found on the associated GitHub pull request. In addition, the presentation Leveraging Nix in the PHP ecosystem was given in late October at the PHP International Conference in Munich by Pol Dellaiera. While the video replay is not yet available, the (reproducible) presentation slides and speaker notes are available.

diffoscope changes diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including:
  • Improving DOS/MBR extraction by adding support for 7z. [ ]
  • Adding a missing RequiredToolNotFound import. [ ]
  • As a UI/UX improvement, try and avoid printing an extended traceback if diffoscope runs out of memory. [ ]
  • Mark diffoscope as stable on PyPI.org. [ ]
  • Uploading version 252 to Debian unstable. [ ]

Website updates A huge number of notes taken at our recent Reproducible Builds Summit, held between October 31st and November 2nd in Hamburg, Germany, were added to our website. In particular, a big thanks to Arnout Engelen, Bernhard M. Wiedemann, Daan De Meyer, Evangelos Ribeiro Tzaras, Holger Levsen and Orhun Parmaksız. In addition to this, a number of other changes were made, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Track packages marked as Priority: important in a new package set. [ ][ ]
    • Stop scheduling packages that fail to build from source in bookworm [ ] and bullseye. [ ].
    • Add old releases dashboard link in web navigation. [ ]
    • Permit the pool_buildinfos script to be re-run for a specific year. [ ]
    • Grant jbglaw access to the osuosl4 node [ ][ ] along with lynxis [ ].
    • Increase RAM on the amd64 Ionos builders from 48 GiB to 64 GiB; thanks IONOS! [ ]
    • Move buster to archived suites. [ ][ ]
    • Reduce the number of arm64 architecture workers from 24 to 16 in order to improve stability [ ], reduce the workers for amd64 from 32 to 28 and, for i386, reduce from 12 down to 8 [ ].
    • Show the entire build history of each Debian package. [ ]
    • Stop scheduling already tested package/version combinations in Debian bookworm. [ ]
  • Snapshot service for rebuilders
    • Add an HTTP-based API endpoint. [ ][ ]
    • Add a Gunicorn instance to serve the HTTP API. [ ]
    • Add an NGINX config. [ ][ ][ ][ ]
  • System-health:
    • Detect failures due to HTTP 503 Service Unavailable errors. [ ]
    • Detect failures to update package sets. [ ]
    • Detect unmet dependencies. (This usually occurs with builds of Debian live-build.) [ ]
  • Misc-related changes:
    • do install systemd-oomd on jenkins. [ ]
    • fix harmless typo in squid.conf for codethink04. [ ]
    • fixup: reproducible Debian: add gunicorn service to serve /api for rebuilder-snapshot.d.o. [ ]
    • Increase codethink04 s Squid cache_dir size setting to 16 GiB. [ ]
    • Don't install systemd-oomd as it unfortunately kills sshd. [ ]
    • Use debootstrap from backports when commissioning nodes. [ ]
    • Add the live_build_debian_stretch_gnome, debsums-tests_buster and debsums-tests_buster jobs to the zombie list. [ ][ ]
    • Run jekyll build with the --watch argument when building the Reproducible Builds website. [ ]
    • Misc node maintenance. [ ][ ][ ]
Other changes were made as well, including Mattia Rizzolo fixing rc.local's Bash syntax so it can actually run [ ], commenting away some file cleanup code that is (potentially) deleting too much [ ] and fixing the html_brekages page for Debian package builds [ ]. Finally, an issue was diagnosed and a patch submitted to add an AddEncoding gzip .gz line to the tests.reproducible-builds.org Apache configuration so that Gzip files aren't re-compressed as Gzip, which some clients can't deal with (as well as being a waste of time). [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

5 December 2023

Matthew Garrett: Why does Gnome fingerprint unlock not unlock the keyring?

There's a decent number of laptops with fingerprint readers that are supported by Linux, and Gnome has some nice integration to make use of that for authentication purposes. But if you log in with a fingerprint, the moment you start any app that wants to access stored passwords you'll get a prompt asking you to type in your password, which feels like it somewhat defeats the point. Mac users don't have this problem - authenticate with TouchID and all your passwords are available after login. Why the difference?

Fingerprint detection can be done in two primary ways. The first is that a fingerprint reader is effectively just a scanner - it passes a graphical representation of the fingerprint back to the OS and the OS decides whether or not it matches an enrolled finger. The second is for the fingerprint reader to make that determination itself, either storing a set of trusted fingerprints in its own storage or supporting being passed a set of encrypted images to compare against. Fprint supports both of these, but note that in both cases all that we get at the end of the day is a statement of "The fingerprint matched" or "The fingerprint didn't match" - we can't associate anything else with that.

Apple's solution involves wiring the fingerprint reader to a secure enclave, an independently running security chip that can store encrypted secrets or keys and only release them under pre-defined circumstances. Rather than the fingerprint reader providing information directly to the OS, it provides it to the secure enclave. If the fingerprint matches, the secure enclave can then provide some otherwise secret material to the OS. Critically, if the fingerprint doesn't match, the enclave will never release this material.

And that's the difference. When you perform TouchID authentication, the secure enclave can decide to release a secret that can be used to decrypt your keyring. We can't easily do this under Linux because we don't have an interface to store those secrets. The secret material can't just be stored on disk - that would allow anyone who had access to the disk to use that material to decrypt the keyring and get access to the passwords, defeating the object. We can't use the TPM because there's no secure communications channel between the fingerprint reader and the TPM, so we can't configure the TPM to release secrets only if an associated fingerprint is provided.

So the simple answer is that fingerprint unlock doesn't unlock the keyring because there's currently no secure way to do that. It's not intransigence on the part of the developers or a conspiracy to make life more annoying. It'd be great to fix it, but I don't see an easy way to do so at the moment.

